!pip install yfinance
!pip install bs4
!pip install plotly
!pip install nbconvert
import yfinance as yf
import pandas as pd
import requests
from bs4 import BeautifulSoup
import plotly.graph_objects as go
from plotly.subplots import make_subplots
In this section, we define the function make_graph, which takes a dataframe with stock data (it must contain Date and Close columns), a dataframe with revenue data (it must contain Date and Revenue columns), and the name of the stock.
def make_graph(stock_data, revenue_data, stock):
    fig = make_subplots(rows=2, cols=1, shared_xaxes=True,
                        subplot_titles=("Historical Share Price", "Historical Revenue"),
                        vertical_spacing=.3)
    # Restrict both series to a common cutoff so the two subplots line up
    stock_data_specific = stock_data[stock_data.Date <= '2021-06-14']
    revenue_data_specific = revenue_data[revenue_data.Date <= '2021-04-30']
    fig.add_trace(go.Scatter(x=pd.to_datetime(stock_data_specific.Date),
                             y=stock_data_specific.Close.astype("float"),
                             name="Share Price"), row=1, col=1)
    fig.add_trace(go.Scatter(x=pd.to_datetime(revenue_data_specific.Date),
                             y=revenue_data_specific.Revenue.astype("float"),
                             name="Revenue"), row=2, col=1)
    fig.update_xaxes(title_text="Date", row=1, col=1)
    fig.update_xaxes(title_text="Date", row=2, col=1)
    fig.update_yaxes(title_text="Price ($US)", row=1, col=1)
    fig.update_yaxes(title_text="Revenue ($US Millions)", row=2, col=1)
    fig.update_layout(showlegend=False,
                      height=900,
                      title=stock,
                      xaxis_rangeslider_visible=True)
    fig.show()
Using the Ticker function, enter the ticker symbol of the stock we want to extract data on to create a ticker object. The stock is Tesla and its ticker symbol is TSLA.
tesla = yf.Ticker('TSLA')
Using the ticker object and the history function, extract stock information and save it in a dataframe named tesla_data. Set the period parameter to "max" so we get information for the maximum amount of time.
tesla_data = tesla.history(period = "max")
Reset the index using reset_index(inplace=True) on the tesla_data dataframe, then display its first five rows using the head function. Take a screenshot of the results and code from the beginning of Question 1 to the results below.
tesla_data.reset_index(inplace=True)
tesla_data.head(5)
| | Date | Open | High | Low | Close | Volume | Dividends | Stock Splits |
|---|---|---|---|---|---|---|---|---|
| 0 | 2010-06-29 | 3.800 | 5.000 | 3.508 | 4.778 | 93831500 | 0 | 0.0 |
| 1 | 2010-06-30 | 5.158 | 6.084 | 4.660 | 4.766 | 85935500 | 0 | 0.0 |
| 2 | 2010-07-01 | 5.000 | 5.184 | 4.054 | 4.392 | 41094000 | 0 | 0.0 |
| 3 | 2010-07-02 | 4.600 | 4.620 | 3.742 | 3.840 | 25699000 | 0 | 0.0 |
| 4 | 2010-07-06 | 4.000 | 4.000 | 3.166 | 3.222 | 34334500 | 0 | 0.0 |
Use the requests library to download the webpage https://www.macrotrends.net/stocks/charts/TSLA/tesla/revenue. Save the text of the response as a variable named html_data.
url = "https://www.macrotrends.net/stocks/charts/TSLA/tesla/revenue"
html_data = requests.get(url).text
Parse the HTML data using BeautifulSoup, storing the parsed object in a variable named beautiful_soup.
beautiful_soup = BeautifulSoup(html_data, 'html5lib')
Using BeautifulSoup or the read_html function extract the table with Tesla Quarterly Revenue and store it into a dataframe named tesla_revenue. The dataframe should have columns Date and Revenue.
# Collect the rows first, then build the dataframe in one step
# (DataFrame.append is deprecated and was removed in pandas 2.0)
rows = []
for row in beautiful_soup.find_all("tbody")[1].find_all("tr"):
    col = row.find_all("td")
    date = col[0].text
    # Strip the dollar sign and thousands separators from the revenue figures
    revenue = col[1].text.replace("$", "").replace(",", "")
    rows.append({"Date": date, "Revenue": revenue})
tesla_revenue = pd.DataFrame(rows, columns=["Date", "Revenue"])

# Drop missing and empty revenue entries
tesla_revenue.dropna(inplace=True)
tesla_revenue = tesla_revenue[tesla_revenue['Revenue'] != ""]
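As the instructions note, the table could also be extracted with pandas' read_html instead of a manual BeautifulSoup loop. Here is a minimal sketch on an inline HTML snippet; the snippet mimics the shape of the quarterly-revenue table (the real page has many more rows, and locating the right table among several on the page is left to the reader):

```python
import pandas as pd
from io import StringIO

# A stand-in snippet shaped like the quarterly-revenue table (values are illustrative)
html_snippet = """
<table>
  <thead><tr><th>Date</th><th>Revenue</th></tr></thead>
  <tbody>
    <tr><td>2010-09-30</td><td>$31</td></tr>
    <tr><td>2010-06-30</td><td>$28</td></tr>
  </tbody>
</table>
"""

# read_html returns a list of dataframes, one per <table> found in the document
df = pd.read_html(StringIO(html_snippet))[0]
df.columns = ["Date", "Revenue"]
# Strip dollar signs and commas, mirroring the BeautifulSoup approach above
df["Revenue"] = df["Revenue"].astype(str).str.replace(r"[\$,]", "", regex=True)
print(df)
```

For the live page you would pass the html_data string (wrapped in StringIO) instead of the snippet and pick the quarterly table out of the returned list.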
Display the last five rows of the tesla_revenue dataframe using the tail function. Take a screenshot of the results.
tesla_revenue.tail(5)
| | Date | Revenue |
|---|---|---|
| 46 | 2010-09-30 | 31 |
| 47 | 2010-06-30 | 28 |
| 48 | 2010-03-31 | 21 |
| 50 | 2009-09-30 | 46 |
| 51 | 2009-06-30 | 27 |
Using the Ticker function, enter the ticker symbol of the stock we want to extract data on to create a ticker object. The stock is GameStop and its ticker symbol is GME.
gme = yf.Ticker('GME')
Using the ticker object and the history function, extract stock information and save it in a dataframe named gme_data. Set the period parameter to "max" so we get information for the maximum amount of time.
gme_data = gme.history(period = "max")
Reset the index using reset_index(inplace=True) on the gme_data dataframe, then display its first five rows using the head function. Take a screenshot of the results and code from the beginning of Question 3 to the results below.
gme_data.reset_index(inplace=True)
gme_data.head()
| | Date | Open | High | Low | Close | Volume | Dividends | Stock Splits |
|---|---|---|---|---|---|---|---|---|
| 0 | 2002-02-13 | 6.480513 | 6.773399 | 6.413183 | 6.766665 | 19054000 | 0.0 | 0.0 |
| 1 | 2002-02-14 | 6.850829 | 6.864295 | 6.682504 | 6.733001 | 2755400 | 0.0 | 0.0 |
| 2 | 2002-02-15 | 6.733003 | 6.749835 | 6.632008 | 6.699338 | 2097400 | 0.0 | 0.0 |
| 3 | 2002-02-19 | 6.665671 | 6.665671 | 6.312189 | 6.430017 | 1852600 | 0.0 | 0.0 |
| 4 | 2002-02-20 | 6.463683 | 6.648840 | 6.413185 | 6.648840 | 1723200 | 0.0 | 0.0 |
Use the requests library to download the webpage https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0220EN-SkillsNetwork/labs/project/stock.html. Save the text of the response as a variable named html_data.
url ="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0220EN-SkillsNetwork/labs/project/stock.html"
html_data = requests.get(url).text
Parse the HTML data using BeautifulSoup.
s = BeautifulSoup(html_data, 'html5lib')
Using BeautifulSoup or the read_html function, extract the table with GameStop Quarterly Revenue and store it in a dataframe named gme_revenue. The dataframe should have columns Date and Revenue. Make sure the comma and dollar sign are removed from the Revenue column, using a method similar to what you did in Question 2.
# Same approach as Question 2: collect rows, then build the dataframe
rows = []
for row in s.find_all("tbody")[1].find_all("tr"):
    col = row.find_all("td")
    date = col[0].text
    # Remove the dollar sign and thousands separators
    revenue = col[1].text.replace("$", "").replace(",", "")
    rows.append({"Date": date, "Revenue": revenue})
gme_revenue = pd.DataFrame(rows, columns=["Date", "Revenue"])
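Since make_graph later casts Revenue with astype("float"), it helps to see how the string cleaning behaves on awkward inputs. A small sketch on hypothetical scraped values, using pd.to_numeric so that stray blanks become NaN and can be dropped:

```python
import pandas as pd

# Hypothetical raw values as scraped: a dollar sign, a plain number, and an empty cell
raw = pd.DataFrame({"Date": ["2005-01-31", "2005-04-30", "2005-07-31"],
                    "Revenue": ["$709", "475", ""]})

# Remove dollar signs and commas in one regex pass
raw["Revenue"] = raw["Revenue"].str.replace(r"[\$,]", "", regex=True)
# errors="coerce" turns the empty string into NaN so dropna can remove it
raw["Revenue"] = pd.to_numeric(raw["Revenue"], errors="coerce")
raw = raw.dropna(subset=["Revenue"])

print(raw["Revenue"].tolist())  # [709.0, 475.0]
```

This is equivalent in effect to the dropna / empty-string filter applied to tesla_revenue in Question 2, just expressed as a numeric conversion.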
Display the last five rows of the gme_revenue dataframe using the tail function. Take a screenshot of the results.
gme_revenue.tail()
| | Date | Revenue |
|---|---|---|
| 57 | 2006-01-31 | 1667 |
| 58 | 2005-10-31 | 534 |
| 59 | 2005-07-31 | 416 |
| 60 | 2005-04-30 | 475 |
| 61 | 2005-01-31 | 709 |
Use the make_graph function to graph the Tesla stock data, providing a title for the graph. The call is make_graph(tesla_data, tesla_revenue, 'Tesla'). Note that the graph will only show data up to June 2021.
make_graph(tesla_data, tesla_revenue, 'Tesla')
Use the make_graph function to graph the GameStop stock data, providing a title for the graph. The call is make_graph(gme_data, gme_revenue, 'GameStop'). Note that the graph will only show data up to June 2021.
make_graph(gme_data, gme_revenue, 'GameStop')